Variance reduction for sequential sampling in stochastic programming
Abstract
This paper investigates the variance reduction techniques Antithetic Variates (AV) and Latin Hypercube Sampling (LHS) when used for sequential sampling in stochastic programming, and presents a comparative computational study. It shows conditions under which sequential sampling procedures that use AV and LHS satisfy finite stopping guarantees and are asymptotically valid, discussing LHS in detail. It computationally compares their use in both the sequential and non-sequential settings through a collection of two-stage stochastic linear programs with different characteristics. The numerical results show that while both AV and LHS can be preferable to random sampling in either setting, LHS typically dominates in the non-sequential setting while performing well sequentially, and AV gains some advantages in the sequential setting. These results imply that, given the ease of implementation of these techniques, armed with the same theoretical properties and improved empirical performance relative to random sampling, the resulting procedures present attractive alternatives in practice for this class of programs.
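For readers unfamiliar with the two techniques, the sketch below contrasts AV and LHS draws with plain i.i.d. Monte Carlo when estimating an expectation such as a second-stage cost. This is a minimal illustration, not the paper's code: the toy function `f` and all parameters are assumptions made here.

```python
# Minimal sketch: i.i.d. Monte Carlo vs. Antithetic Variates (AV)
# vs. Latin Hypercube Sampling (LHS) for estimating E[f(U)], U ~ Uniform(0,1).
import numpy as np

rng = np.random.default_rng(0)

def f(u):
    # Toy stand-in for a second-stage cost evaluated at a uniform input.
    return np.maximum(10.0 * u - 4.0, 0.0)

n = 1000  # sample size (even, so AV pairs split it evenly)

# i.i.d. Monte Carlo
est_iid = f(rng.random(n)).mean()

# Antithetic Variates: pair each draw u with its mirror 1 - u;
# for monotone f the pairs are negatively correlated, reducing variance.
u_half = rng.random(n // 2)
est_av = 0.5 * (f(u_half) + f(1.0 - u_half)).mean()

# Latin Hypercube Sampling (1-D): one uniform draw per stratum
# [k/n, (k+1)/n), in random order, so the points cover (0,1) evenly.
u_lhs = (rng.permutation(n) + rng.random(n)) / n
est_lhs = f(u_lhs).mean()

print(est_iid, est_av, est_lhs)
```

Across replications, the AV and LHS estimators typically show noticeably lower variance than the i.i.d. one here, since `f` is monotone and low-dimensional; this is the effect the paper studies in the sequential setting.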
Similar articles
A Sequential Sampling Procedure for Stochastic Programming
We develop a sequential sampling procedure for a class of stochastic programs. We assume that a sequence of feasible solutions with an optimal limit point is given as input to our procedure. Such a sequence can be generated by solving a series of sampling problems with increasing sample size, or it can be found by any other viable method. Our procedure estimates the optimality gap of a candidat...
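A hedged sketch of such a loop is given below: grow the sample, estimate the optimality gap of the current candidate solution, and stop once the gap estimate plus its sampling error falls below a tolerance. The function `gap_observations` is a placeholder for a user-supplied routine, and the stopping rule, growth factor, and normal-quantile approximation are illustrative stand-ins, not the authors' exact procedure.

```python
# Generic sequential sampling loop with an optimality-gap stopping test.
import numpy as np

def sequential_sampling(gap_observations, eps=0.1, n0=100,
                        growth=1.5, max_iter=50):
    z = 1.6449  # approx. standard normal 95% quantile (alpha = 0.05)
    n = n0
    for _ in range(max_iter):
        g = gap_observations(n)              # n i.i.d. gap observations
        gap_hat = g.mean()                   # point estimate of the gap
        half_width = z * g.std(ddof=1) / np.sqrt(n)  # sampling error
        if gap_hat + half_width <= eps:      # stop: gap confidently small
            return gap_hat, half_width, n
        n = int(np.ceil(growth * n))         # otherwise enlarge the sample
    return gap_hat, half_width, n
```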
Sequential Importance Sampling Algorithms for Dynamic Stochastic Programming
This paper gives a comprehensive treatment of EVPI-based sequential importance sampling algorithms for dynamic (multistage) stochastic programming problems. Both theory and computational algorithms are discussed. Under general assumptions it is shown that both expected value of perfect information (EVPI) processes and the marginal EVPI process (the supremum norm of the conditional expectation o...
Stochastic Variance Reduction for Policy Gradient Estimation
Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) technique [1] to model-free p...
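The SVRG idea this excerpt refers to can be shown on a simpler supervised problem: anchor each stochastic gradient to a periodically recomputed full gradient. The least-squares setting and parameters below are assumptions for illustration, not the paper's policy-gradient construction.

```python
# SVRG sketch on least squares: g = grad_i(w) - grad_i(w_snap) + mu
# is unbiased, and its variance vanishes as w approaches the optimum.
import numpy as np

def svrg(X, y, lr=0.05, epochs=20):
    n, d = X.shape
    w = np.zeros(d)
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]  # per-sample gradient
    for _ in range(epochs):
        w_snap = w.copy()
        mu = X.T @ (X @ w_snap - y) / n             # full gradient at snapshot
        for i in np.random.permutation(n):
            g = grad_i(w, i) - grad_i(w_snap, i) + mu  # variance-reduced step
            w -= lr * g
    return w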
Online Variance Reduction for Stochastic Optimization
Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data. This might degrade the convergence by yielding estimates that suffer from a high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently pr...
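The general non-uniform importance sampling idea reads as follows: sample index i with probability p_i and reweight by 1/(n p_i) so the gradient estimator stays unbiased. The gradient-norm proposal below is a common heuristic assumed here for illustration, not necessarily the scheme that paper proposes.

```python
# Importance-sampled gradient estimate for a finite sum (1/n) * sum_i g_i:
# E[g_i / (n * p_i)] = sum_i p_i * g_i / (n * p_i) = (1/n) * sum_i g_i.
import numpy as np

def importance_sampled_gradient(per_sample_grads, rng):
    n = len(per_sample_grads)
    norms = np.linalg.norm(per_sample_grads, axis=1)
    # Proposal proportional to gradient norms; fall back to uniform if all zero.
    p = norms / norms.sum() if norms.sum() > 0 else np.full(n, 1.0 / n)
    i = rng.choice(n, p=p)
    return per_sample_grads[i] / (n * p[i])  # reweighted, unbiased estimate
```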
Variance Reduction for Stochastic Gradient Optimization
Stochastic gradient optimization is a class of widely used algorithms for training machine learning models. To optimize an objective, it uses the noisy gradient computed from the random data samples instead of the true gradient computed from the entire dataset. However, when the variance of the noisy gradient is large, the algorithm might spend much time bouncing around, leading to slower conve...
Journal
Journal title: Annals of Operations Research
Year: 2021
ISSN: 1572-9338, 0254-5330
DOI: https://doi.org/10.1007/s10479-020-03908-x